
    Reinforcement Learning Approaches in Social Robotics

    This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent discovers optimal behavior through trial-and-error interaction with its environment. Since interaction is a key component of both reinforcement learning and social robotics, reinforcement learning is well suited to real-world interactions with physically embodied social robots. The scope of the paper is focused particularly on studies that involve physical social robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to a survey, we categorize existing reinforcement learning approaches based on the method used and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Considering the importance of designing the reward function, we also categorize the papers based on the nature of the reward. This categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task performance-driven methods. The paper also discusses the benefits and challenges of reinforcement learning in social robotics, the evaluation methods of the surveyed papers (whether they use subjective or algorithmic measures), real-world reinforcement learning challenges and proposed solutions, and the points that remain to be explored, including approaches that have thus far received less attention. Thus, this paper aims to serve as a starting point for researchers interested in applying reinforcement learning methods in this particular research field.
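    The trial-and-error loop the abstract describes — an agent updating its behavior from environment feedback to discover an optimal policy — can be sketched with tabular Q-learning on a toy task. The five-state chain, hyperparameters, and reward below are illustrative assumptions, not an example taken from the survey.

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain (illustrative only).
# The agent moves left/right; reaching state 4 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Environment dynamics: a deterministic chain with a terminal goal state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                    # episodes of trial-and-error interaction
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

greedy_policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

    After training, the greedy policy moves right toward the rewarding state, which is the behavior the reward function was designed to elicit — the same reward-design question the survey's categorization revolves around.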

    Perceived Safety in Social Human-Robot Interaction

    This compilation thesis contributes to a deeper understanding of perceived safety in human-robot interaction (HRI), with a particular focus on social robots. The current understanding of safety in HRI is mostly limited to physical safety, whereas perceived safety has often been neglected and underestimated. However, safe HRI requires a conceptualization of safety that goes beyond physical safety and also covers the perceived safety of users. Within this context, this thesis provides a comprehensive analysis of perceived safety in HRI with social robots, considering a diverse set of human-related and robot-related factors. Two particular challenges for providing perceived safety in HRI are 1) understanding and evaluating human safety perception through direct and indirect measures, and 2) utilizing the measured level of perceived safety to adapt the robot's behaviors. The primary contribution of this dissertation is in addressing the first challenge. The thesis investigates perceived safety in HRI by alternating between conducting user studies, reviewing the literature, and testing findings from the literature in further user studies. Six main factors influencing perceived safety in HRI are identified: the context of robot use; the user's comfort; experience and familiarity with robots; trust; sense of control over the interaction; and transparent and predictable robot behaviors. These factors could provide a common understanding of perceived safety and bridge the theoretical gap in the literature. Moreover, the thesis proposes an experimental paradigm to observe and quantify perceived safety using objective and subjective measures, which contributes to bridging the methodological gap in the literature. The six factors are reviewed in the HRI literature, and the robot features that affect them are organized in a taxonomy. Although this taxonomy focuses on social robots, the identified characteristics are relevant to other types of robots and autonomous systems. In addition to the taxonomy, the thesis provides a set of guidelines for supporting perceived safety in social HRI. As a secondary contribution, the thesis presents an overview of reinforcement learning applications in social robotics as a suitable learning mechanism for adapting robot behaviors to mitigate psychological harm.

    The Influence of Feedback Type in Robot-Assisted Training

    Robot-assisted training, in which social robots are used as motivational coaches, is an interesting application area. This paper examines how feedback given by a robot agent influences various facets of participant experience in robot-assisted training. Specifically, we investigated the effects of feedback type on robot acceptance, sense of safety and security, attitude towards robots, and task performance. In the experiment, 23 older participants performed basic arm exercises with a social robot as a guide and received feedback. Different feedback conditions were administered, such as flattering, positive, and negative feedback. Our results suggest that the robot giving flattering and positive feedback was appreciated by older people in general, even when the feedback did not necessarily correspond to objective measures such as performance. Participants in these groups felt better about the interaction and the robot.
    SOCRATES - Marie Skłodowska-Curie grant agreement No 72161

    Understanding Cultural Preferences for Social Robots: A Study in German and Arab Communities

    This article presents a study of cultural differences affecting the acceptance and design preferences of social robots. Based on a survey with 794 participants from Germany and the three Arab countries of Egypt, Jordan, and Saudi Arabia, we discuss how culture influences preferences for certain attributes. We look at the social roles, abilities, appearance, emotional awareness, and interactivity of social robots, as well as attitudes toward automation. Preferences were found to differ not only across cultures, but also within countries with similar cultural backgrounds. Our findings also show a nuanced picture of the impact of previously identified culturally variable factors, such as attitudes toward traditions and innovations. While the participants' perspectives toward traditions and innovations varied, these factors did not fully account for the cultural variations in their perceptions of social robots. In conclusion, we believe that more real-life practices emerging from the situated use of robots should be investigated. Besides focusing on the impact of broader cultural values such as those associated with religion and traditions, future studies should examine how users interact, or avoid interaction, with robots within specific contexts of use.

    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security

    The aim of the study presented in this paper is to develop a quantitative evaluation tool for the sense of safety and security with robots in eldercare. Based on an investigation of the literature on measuring safety and security in human-robot interaction, we propose new evaluation tools in the form of semantic differential scale questionnaires. For experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot's non-verbal behavior from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.
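    Internal consistency of questionnaires like these is commonly checked with Cronbach's alpha. The sketch below computes it from scratch; the four 5-point items and six respondents are invented illustrative data, not measurements from the study.

```python
# Cronbach's alpha for a semantic differential questionnaire (illustrative sketch;
# the ratings below are invented, not data from the study).
def cronbach_alpha(items):
    """items: one inner list of ratings per questionnaire item (same respondents)."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(item) for item in items)
    # per-respondent total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Four 5-point items rated by six hypothetical participants (columns = respondents).
ratings = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
]
alpha = cronbach_alpha(ratings)  # values above ~0.7 are usually read as acceptable consistency
```

    On this toy data the items co-vary strongly, so alpha comes out high; "good internal consistency" in the abstract refers to exactly this kind of statistic computed on the real questionnaire responses.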

    Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning

    This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user's verbal and nonverbal social cues through its camera and microphone; the reward will be based on the positive valence and engagement of the user.
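    The reward the abstract mentions — positive valence and engagement of the user — could be formulated, for example, as a weighted combination of two perception estimates. The function name, weights, and value ranges below are assumptions for illustration, not the authors' actual reward design.

```python
# Sketch of a reward signal built from perceived user state: a weighted combination
# of estimated emotional valence and engagement. Weights, ranges, and clamping are
# illustrative assumptions, not the formulation from this research.
def interaction_reward(valence, engagement, w_valence=0.5, w_engagement=0.5):
    """valence in [-1, 1] (negative to positive affect), engagement in [0, 1]."""
    valence = max(-1.0, min(1.0, valence))        # clamp noisy perception estimates
    engagement = max(0.0, min(1.0, engagement))
    return w_valence * valence + w_engagement * engagement

# A smiling, attentive user yields a high reward; a displeased, disengaged one a low reward.
high = interaction_reward(1.0, 1.0)    # best case
low = interaction_reward(-1.0, 0.0)    # worst case
```

    A deep RL agent would then be trained to select robot behaviors that maximize the cumulative value of such a signal over the course of an interaction.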